HARPA: A Testability-Driven, Literature-Grounded Framework for Research Ideation

Vasu, Rosni, Jansen, Peter, Siangliulue, Pao, Sarasua, Cristina, Bernstein, Abraham, Clark, Peter, Mishra, Bhavana Dalvi

arXiv.org Artificial Intelligence

While there has been a surge of interest in automated scientific discovery (ASD), especially with the emergence of LLMs, it remains challenging for tools to generate hypotheses that are both testable and grounded in the scientific literature. Additionally, existing ideation tools are not adaptive to prior experimental outcomes. We developed HARPA to address these challenges by incorporating an ideation workflow inspired by human researchers. HARPA first identifies emerging research trends through literature mining, then explores hypothesis design spaces, and finally converges on precise, testable hypotheses by pinpointing research gaps and justifying design choices. Our evaluations show that HARPA-generated hypothesis-driven research proposals perform comparably to a strong AI-researcher baseline across most qualitative dimensions (e.g., specificity, novelty, overall quality), but achieve significant gains in feasibility (+0.78, p < 0.05, bootstrap) and groundedness (+0.85, p < 0.01, bootstrap) on a 10-point Likert scale. When tested with the ASD agent (CodeScientist), HARPA produced more successful executions (20 vs. 11 out of 40) and fewer failures (16 vs. 21 out of 40), showing that expert feasibility judgments track with actual execution success. Furthermore, to simulate how researchers continuously refine their understanding of which hypotheses are both testable and potentially interesting from experience, HARPA learns a reward model that scores new hypotheses based on prior experimental outcomes, achieving approximately a 28% absolute gain over HARPA's untrained baseline scorer. Together, these methods represent a step forward in the field of AI-driven scientific discovery.
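The reward model described above scores new hypotheses using feedback from prior experimental outcomes. The abstract does not specify its form, so the following is only a toy sketch under the assumption that each hypothesis can be featurized as a numeric vector and each prior experiment yields a binary success label; all names (`fit_reward_model`, `score_hypothesis`) are illustrative, not from HARPA.

```python
import math

def fit_reward_model(history, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression reward model on prior outcomes.

    history: list of (feature_vector, outcome) pairs, where outcome is
    1 for a successful experiment and 0 for a failed one. This is a
    toy stand-in for a learned hypothesis scorer, not HARPA's actual model.
    """
    dim = len(history[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in history:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted success probability
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score_hypothesis(model, features):
    """Score a new hypothesis in [0, 1] with the fitted model."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

As more experiments complete, their outcomes are appended to `history` and the model is refit, so the scorer's notion of a "promising" hypothesis shifts with accumulated experience.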


Efficiently improving key weather variables forecasting by performing the guided iterative prediction in latent space

Li, Shuangliang, Li, Siwei

arXiv.org Artificial Intelligence

Weather forecasting involves learning the evolutionary patterns of key upper-air and surface variables, a task of great practical significance. Recently, deep learning-based methods have been increasingly applied to weather forecasting thanks to their powerful feature-learning capabilities. However, prediction methods that iterate in the original variable space struggle to use a large number of weather variables effectively and efficiently. We therefore propose an 'encoding-prediction-decoding' prediction network. This network can exploit many related input variables alongside the key variables: it adaptively extracts a key-variable-related, low-dimensional latent feature from a much larger set of input atmospheric variables for iterative prediction. We also construct a loss function that guides the latent-feature iteration using multiple atmospheric variables at the corresponding lead times. The latent features obtained through iterative prediction are then decoded into predicted values of the key variables at multiple lead times. In addition, we improve the HTA algorithm of Bi et al. (2023) by inputting more time steps, strengthening the temporal correlation between the prediction results and the input variables. Both qualitative and quantitative prediction results on the ERA5 dataset validate the superiority of our method over other methods. (The code will be available at https://github.com/rs-lsl/Kvp-lsi)
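The encode-iterate-decode pipeline described above can be sketched in a few lines. This is a minimal toy with random linear maps standing in for the learned networks; all dimensions and names are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: many input atmospheric variables, few key output
# variables, and a low-dimensional latent space (sizes are illustrative).
N_VARS, N_KEY, LATENT = 64, 4, 8

# Random linear maps standing in for the learned encoder, the one-step
# latent predictor, and the decoder of the encoding-prediction-decoding
# pipeline.
W_enc = rng.standard_normal((LATENT, N_VARS)) * 0.1
W_pred = np.eye(LATENT) + rng.standard_normal((LATENT, LATENT)) * 0.01
W_dec = rng.standard_normal((N_KEY, LATENT)) * 0.1

def forecast(x0, lead_times):
    """Encode once, iterate in latent space, decode key variables per lead time."""
    z = W_enc @ x0                  # encode full variable set into the latent space
    outputs = []
    for _ in range(lead_times):
        z = W_pred @ z              # one iterative prediction step, in latent space
        outputs.append(W_dec @ z)   # decode the key variables at this lead time
    return outputs

preds = forecast(rng.standard_normal(N_VARS), lead_times=5)
```

The point of the design is that the expensive iteration happens in the 8-dimensional latent space rather than over all 64 input variables, while the decoder recovers only the few key variables the forecast actually needs.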


Scientists make first detection of exotic "X" particles in quark-gluon plasma

#artificialintelligence

In the first millionths of a second after the Big Bang, the universe was a roiling, trillion-degree plasma of quarks and gluons -- elementary particles that briefly glommed together in countless combinations before cooling and settling into more stable configurations to make the neutrons and protons of ordinary matter. In the chaos before cooling, a fraction of these quarks and gluons collided randomly to form short-lived "X" particles, so named for their mysterious, unknown structures. Today, X particles are extremely rare, though physicists have theorized that they may be created in particle accelerators through quark coalescence, where high-energy collisions can generate similar flashes of quark-gluon plasma. Now physicists at MIT's Laboratory for Nuclear Science and elsewhere have found evidence of X particles in the quark-gluon plasma produced in the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, based near Geneva, Switzerland. The team used machine-learning techniques to sift through more than 13 billion heavy ion collisions, each of which produced tens of thousands of charged particles.


Can AI Predict Global Pandemics Like The Coronavirus?

#artificialintelligence

AI is supposed to be the most powerful pattern detection and prediction technology in the world. It therefore begs the question: Can we use AI to predict future global pandemics far enough in advance to tamp them down or prevent them altogether? It's an enormously consequential question, because the answer is not only relevant to sounding the alarm on future pandemics; it also speaks to the potential of AI for businesses. The short answer is "yes-ish." "Yes" because AI, specifically machine learning (ML), analyzes historical data to find the key variables that are predictive of any event, such as a pandemic.


Can AI predict global pandemics like the coronavirus? ZDNet

#artificialintelligence

AI is supposed to be the most powerful pattern detection and prediction technology in the world. It therefore begs the question: Can we use AI to predict future global pandemics far enough in advance to tamp them down or prevent them altogether? It's an enormously consequential question because the answer is not only relevant to sounding the alarm on future pandemics; it also speaks to the potential of AI for businesses. From cancelled conferences to disrupted supply chains, not a corner of the global economy is immune to the spread of COVID-19. The short answer is "yes-ish."


Foreseeing Armageddon: Could AI have predicted the Financial Crisis?

#artificialintelligence

The Global Financial Crisis (hereafter, GFC) of 2007–2008 had far-reaching financial and legal consequences, affecting millions of livelihoods. Sparked by the proliferation of subprime mortgages and exemplified by the fall of Lehman Brothers, its aftershocks were widely felt around the world. Following a period of recovery and growth, the world plunged into the European Sovereign Debt Crisis (hereafter, ESDC) beginning in late 2009, the effects of which have been argued to be still ongoing today. As global markets tend to operate in a cyclical fashion, scenario planning for the next financial crisis is not a matter of if, but when. Indeed, a Google search would lead to hundreds of differing opinions on the matter, from conclusions based on quantitative metrics to others based on the prophecies of Nostradamus.


The Hunt for Explainable AI

#artificialintelligence

The notion that we should understand how artificial intelligences make decisions is gaining increasing currency. As we face a future in which important decisions affecting the course of our lives may be made by artificial intelligence (AI), the idea that we should understand how AIs make decisions is gaining increasing currency. Which hill to position a 20-year-old soldier on, who gets (or does not get) a home mortgage, which treatment a cancer patient receives … such decisions, and many more, already are being made based on an often unverifiable technology. "The problem is that not all AI approaches are created equal," says Jeff Nicholson, a vice president at Pega Systems Inc., makers of AI-based Customer Relationship Management (CRM) software. "Certain 'black box' approaches to AI are opaque and simply cannot be explained."